
    Data-Driven Artificial Intelligence for Calibration of Hyperspectral Big Data

    Near-earth hyperspectral big data present both huge opportunities and challenges for spurring developments in agriculture and high-throughput plant phenotyping and breeding. In this article, we present data-driven approaches to address the calibration challenges of utilizing near-earth hyperspectral data for agriculture. A data-driven, fully automated calibration workflow was developed that includes a suite of robust algorithms for radiometric calibration, bidirectional reflectance distribution function (BRDF) correction and reflectance normalization, soil and shadow masking, and image quality assessment. An empirical method that utilizes predetermined models between camera photon counts (digital numbers) and downwelling irradiance measurements for each spectral band was established to perform radiometric calibration. A kernel-driven semiempirical BRDF correction method based on the Ross Thick-Li Sparse (RTLS) model was used to normalize the data for both changes in solar elevation and sensor view-angle differences attributed to pixel location within the field of view. Following rigorous radiometric and BRDF corrections, novel rule-based methods were developed for automatic soil removal, a newly proposed approach was used for image quality assessment, and shadow masking and plot-level feature extraction were carried out. Our results show that the automated calibration, processing, storage, and analysis pipeline developed in this work can effectively handle massive amounts of hyperspectral data and address urgent challenges in the production of sustainable bioenergy and food crops, targeting methods to accelerate plant breeding for improved yield and biomass traits.
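The per-band empirical calibration described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the linear DN-to-radiance model, and the Lambertian reflectance-factor step are all assumptions made for the sketch.

```python
import numpy as np

def radiometric_calibration(dn_cube, irradiance, gain, offset):
    """Convert raw digital numbers (DN) to reflectance, band by band.

    dn_cube      : (rows, cols, bands) raw camera counts
    irradiance   : (bands,) downwelling irradiance per spectral band
    gain, offset : (bands,) per-band coefficients of a predetermined
                   linear DN-to-radiance model (hypothetical form)
    """
    # Assumed per-band linear model: radiance = gain * DN + offset
    radiance = gain * dn_cube + offset
    # Lambertian reflectance factor: pi * radiance normalized by the
    # simultaneously measured downwelling irradiance for that band
    return np.pi * radiance / irradiance
```

Because `gain`, `offset`, and `irradiance` broadcast over the last axis, every band uses its own model, which is the essential point of the per-band empirical approach.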

    Assessing the Impacts of Anthropogenic Drainage Structures on Hydrologic Connectivity Using High-Resolution Digital Elevation Models

    Stream flowline delineation from high-resolution digital elevation models (HRDEMs) can be problematic due to the fine representation of terrain features as well as anthropogenic drainage structures (e.g., bridges, culverts) within the grid surface. Anthropogenic drainage structures (ADS) may create digital dams when stream flowlines are delineated from HRDEMs. This study assessed the effects of ADS locations, spatial resolution (ranging from 1 m to 10 m), depression processing methods, and flow direction algorithms (D8, D-Infinity, and MFD-md) on hydrologic connectivity through digital dams using HRDEMs in Nebraska. The assessment was based on the offset distances between modeled stream flowlines and original ADS locations, using kernel density estimation (KDE) and the calculated frequency of ADS samples within offset distances. Three major depression processing techniques (i.e., depression filling, stream breaching, and stream burning) were considered. Finally, an automated method, constrained burning, was proposed for HRDEMs; it utilizes ancillary datasets to create stream crossings underneath possible ADS locations and to perform DEM reconditioning. The results suggest that coarser-resolution DEMs with depression filling and breaching can produce better hydrologic connectivity through ADS than finer-resolution DEMs with different flow direction algorithms. It was also found that stream burning with known stream crossings at ADS locations outperformed depression filling and breaching techniques for HRDEMs in terms of hydrologic connectivity. Flow direction algorithms combined with depression filling and breaching techniques do not have significant effects on the hydrologic connectivity of modeled stream flowlines. However, for stream burning methods, D8 was found to be the best-performing flow direction algorithm in HRDEMs, with statistical significance. The stream flowlines delineated from HRDEMs using the proposed constrained burning method were found to be better than those produced by depression filling and breaching techniques. The method has an overall accuracy of 78.82% in detecting possible ADS locations within the study area.
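Of the three flow direction algorithms compared above, D8 is the simplest: each cell drains to its steepest-descent neighbour, and a cell with no lower neighbour is a pit, which is how a digital dam manifests. A minimal sketch (my own illustration, not the study's code) of the D8 rule on a small DEM grid:

```python
import math

# D8 neighbour offsets (dr, dc); diagonal steps are sqrt(2) cells long
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def d8_direction(dem, r, c):
    """Return the (dr, dc) offset of the steepest-descent neighbour
    of cell (r, c), or None if the cell is a pit (a digital dam)."""
    best, best_slope = None, 0.0
    for dr, dc in NEIGHBOURS:
        rr, cc = r + dr, c + dc
        if 0 <= rr < len(dem) and 0 <= cc < len(dem[0]):
            dist = math.hypot(dr, dc)  # 1 or sqrt(2) cell widths
            slope = (dem[r][c] - dem[rr][cc]) / dist
            if slope > best_slope:
                best, best_slope = (dr, dc), slope
    return best
```

Depression filling, breaching, and burning all exist to guarantee that `d8_direction` never returns `None` on cells that should carry flow, e.g. where a culvert passes under a road embankment.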

    UAV Multisensory Data Fusion and Multi-Task Deep Learning for High-Throughput Maize Phenotyping

    Recent advances in unmanned aerial vehicles (UAVs), mini and mobile sensors, and GeoAI (a blend of geospatial and artificial intelligence (AI) research) are the main highlights among agricultural innovations to improve crop productivity and thus secure vulnerable food systems. This study investigated the versatility of UAV-borne multisensory data fusion within a framework of multi-task deep learning for high-throughput phenotyping in maize. Data were collected by UAVs equipped with a set of miniaturized sensors, including hyperspectral, thermal, and LiDAR, over an experimental corn field in Urbana, IL, USA during the growing season. A full suite of eight phenotypes was measured in situ at the end of the season as ground truth data: dry stalk biomass, cob biomass, dry grain yield, harvest index, grain nitrogen utilization efficiency (Grain NutE), grain nitrogen content, total plant nitrogen content, and grain density. After being funneled through a series of radiometric calibrations and geo-corrections, the aerial data were analytically processed in three primary ways. First, an extended version of the normalized difference spectral index (NDSI) served as a simple arithmetic combination of different data modalities to explore the degree of correlation with maize phenotypes. The extended NDSI analysis revealed that the NIR spectra (750–1000 nm) alone were strongly related to all eight maize traits. Second, a fusion of vegetation indices, structural indices, and a thermal index, selectively handcrafted from each data modality, was fed to classical machine learning regressors, Support Vector Machine (SVM) and Random Forest (RF). Prediction performance varied from phenotype to phenotype, ranging from R2 = 0.34 for grain density up to R2 = 0.85 for both grain nitrogen content and total plant nitrogen content. Further, a fusion of hyperspectral and LiDAR data overcame the limitations of any single data modality, especially the vegetation saturation effect that occurs in optical remote sensing. Third, a multi-task deep convolutional neural network (CNN) was customized to take a raw imagery fusion of hyperspectral, thermal, and LiDAR data and predict multiple maize traits at a time. The multi-task deep learning performed comparably, if not better for some traits, with mono-task deep learning and machine learning regressors. Data augmentation for the deep learning models boosted prediction accuracy, helping to alleviate the intrinsic limitations of small sample sizes and unbalanced sample classes in remote sensing research. Theoretical and practical implications for plant breeders and crop growers are also made explicit in the discussion.
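The NDSI analysis above amounts to an exhaustive search over band pairs for the normalized difference index most correlated with each trait. A minimal sketch under assumptions of my own (plot-mean spectra as input, absolute Pearson correlation as the score; the exact "extended" formulation in the study may differ):

```python
import numpy as np

def best_ndsi_pair(spectra, trait, wavelengths):
    """Search all band pairs (i, j) for the normalized difference
    spectral index NDSI = (R_i - R_j) / (R_i + R_j) that is most
    correlated with a measured trait.

    spectra : (n_plots, n_bands) plot-mean reflectance
    trait   : (n_plots,) ground-truth phenotype values
    Returns (wavelength_i, wavelength_j, |r|) of the best pair.
    """
    n_bands = spectra.shape[1]
    best = (None, None, 0.0)
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            ndsi = (spectra[:, i] - spectra[:, j]) / \
                   (spectra[:, i] + spectra[:, j] + 1e-9)
            r = abs(np.corrcoef(ndsi, trait)[0, 1])
            if r > best[2]:
                best = (wavelengths[i], wavelengths[j], r)
    return best
```

With hundreds of hyperspectral bands the pair search is quadratic in band count, which is why it is usually run on plot-level means rather than per pixel.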

    Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning

    Agricultural management at field scale is critical for improving yield to address global food security, as providing enough food for the world's growing population has become a wicked problem for both scientists and policymakers. County- or regional-scale data do not provide meaningful information to farmers, who are interested in field-scale yield forecasting for effective and timely field management. No previous studies have directly utilized raw satellite imagery for field-scale yield prediction using deep learning. The objectives of this paper were twofold: (1) to develop a raw-imagery-based deep learning approach for field-scale yield prediction, and (2) to investigate the contribution of in-season multitemporal imagery to grain yield prediction, with hand-crafted features and WorldView-3 (WV-3) and PlanetScope (PS) imagery as the direct input, respectively. Four WV-3 and 25 PS images collected during the soybean growing season were utilized. Both 2-dimensional (2D) and 3-dimensional (3D) convolutional neural network (CNN) architectures were developed that integrate the spectral, spatial, and temporal information contained in the satellite data. For comparison, hundreds of carefully selected spectral, spatial, textural, and temporal features that are optimal for crop growth monitoring were extracted and fed into the same deep learning model. Our results demonstrated that (1) deep learning was able to predict yield directly from raw satellite imagery to an extent comparable to feature-fed deep learning approaches; (2) both 2D and 3D CNN models were able to explain nearly 90% of the variance in field-scale yield; (3) a limited number of WV-3 images outperformed multi-temporal PS data collected during the entire growing season, mainly owing to the RedEdge and SWIR bands available with WV-3; and (4) 3D CNN increased the prediction power of PS data compared with 2D CNN due to its ability to digest temporal features extracted from the PS data.
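The advantage of the 3D CNN over the 2D CNN comes down to its core operation: a kernel that slides across time as well as space, so temporal change is learned jointly with spatial pattern instead of date by date. A bare NumPy sketch of that operation (an illustration of the mechanism, not the paper's architecture):

```python
import numpy as np

def conv3d_valid(cube, kernel):
    """'Valid' 3-D convolution over a (time, rows, cols) data cube.

    A 2-D CNN would apply a (rows, cols) kernel to each date
    separately; a 3-D kernel also spans the time axis, which is
    what lets the network digest temporal features directly.
    """
    T, R, C = cube.shape
    t, r, c = kernel.shape
    out = np.zeros((T - t + 1, R - r + 1, C - c + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over a (t, r, c) spatiotemporal patch
                out[i, j, k] = np.sum(cube[i:i+t, j:j+r, k:k+c] * kernel)
    return out
```

For example, a kernel spanning two dates responds to between-date change in reflectance, something no single-date 2D kernel can represent.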

    Upconversion luminescence in Tm3+/Yb3+ co-doped double-clad silica fibers under 980 nm cladding pumping

    An investigation is reported of the visible and near-infrared upconversion in Tm3+/Yb3+ co-doped double-clad silica fibers (TYDFs) under excitation at 980 nm. The TYDFs used were fabricated using the modified chemical vapor deposition (MCVD) and solution doping techniques. Three distinct upconversion luminescence bands were observed at wavelengths of 482, 649, and 816 nm, and their intensities were found to increase with Yb3+ concentration. The intensities of the blue and red fluorescence bands at 482 and 649 nm were highest in the LTY-8 fiber, which had Tm3+ and Yb3+ concentrations of 5.6 × 10^19 and 15.5 × 10^19 ions/cm^3, respectively. The upconversion luminescence intensity was also observed to decrease with increasing temperature. The main emission switched from 482 nm to 816 nm as the temperature increased above 200 °C.
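Since each emitted upconversion photon carries more energy than one 980 nm pump photon, energy conservation sets a lower bound on how many pump photons each emission line requires (using E proportional to 1/wavelength). This is only a bound, not the actual Yb3+-to-Tm3+ energy-transfer pathway, and the function below is my own illustrative sketch:

```python
import math

def min_pump_photons(pump_nm, emission_nm):
    """Smallest integer n with n * E(pump) >= E(emission), i.e.
    n >= pump_nm / emission_nm, since photon energy is
    proportional to 1 / wavelength."""
    return math.ceil(pump_nm / emission_nm)

# Under 980 nm pumping: the blue 482 nm line needs at least three
# pump photons, while the red 649 nm and near-infrared 816 nm
# lines each need at least two.
```

This is consistent with the blue band being the most sensitive to Yb3+ concentration: a three-photon process depends more strongly on the density of excited sensitizer ions than a two-photon one.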